Skip torch random ops in CoreML partitioner #19246
john-rocky wants to merge 2 commits into pytorch:main
Conversation
coremltools' converter fails the input-count check for the torch random ops (rand / randn / rand_like / randn_like / randint / randint_like) and aborts with an internal error during CoreML compilation. Reject them in the partitioner so they fall back to the portable backend. Fixes pytorch#11722.
Summary
coremltools' MIL converter fails the input-count check for the torch random ops (`rand` / `randn` / `rand_like` / `randn_like` / `randint` / `randint_like`) and aborts during compilation with an internal error (see the issue for the full traceback).

Add these ops to `should_override_support` so the partitioner refuses to delegate them and they fall back to the portable backend, the same way `acosh` / `asinh` are handled today.

Fixes #11722.
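The skip logic can be sketched in plain Python. Note this is a minimal, self-contained illustration: the op-name strings and the standalone helper below are hypothetical stand-ins, not the actual signature of ExecuTorch's CoreML partitioner.

```python
# Illustrative sketch of refusing to delegate torch random ops in a
# partitioner. Op names here are hypothetical stand-ins for the real
# should_override_support check in the CoreML partitioner.

# Random ops that coremltools' MIL converter cannot handle.
_TORCH_RANDOM_OPS = {
    "aten.rand.default",
    "aten.randn.default",
    "aten.rand_like.default",
    "aten.randn_like.default",
    "aten.randint.default",
    "aten.randint_like.default",
}

def should_override_support(op_name: str) -> bool:
    """Return True when the partitioner should refuse to delegate this op,
    so it falls back to the portable backend instead."""
    return op_name in _TORCH_RANDOM_OPS
```

Any op not in the set keeps its normal support decision; only the known-broken random ops are forced back to the portable backend.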
Test plan
Added `test_random_ops_are_skipped`, which lowers a model that adds `torch.randn` + `torch.rand` outputs and asserts the random ops remain in the top-level graph (not delegated). Also reproduced the original `randn_like` repro from the issue and confirmed it now lowers without crashing.
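The property the test checks can be modeled with a toy partition (the helper and op names are hypothetical; the real test lowers a model through the CoreML backend and inspects the resulting graph):

```python
# Toy model of the partitioning decision the test asserts: random ops
# stay in the top-level graph, everything else is delegated to CoreML.
RANDOM_OPS = {"aten.rand.default", "aten.randn.default"}

def partition(ops):
    """Split ops into (delegated, top_level) lists, rejecting random ops."""
    delegated, top_level = [], []
    for op in ops:
        (top_level if op in RANDOM_OPS else delegated).append(op)
    return delegated, top_level

delegated, top_level = partition(
    ["aten.randn.default", "aten.add.Tensor", "aten.rand.default"]
)
# Random ops must remain at the top level, not inside the delegate.
assert top_level == ["aten.randn.default", "aten.rand.default"]
assert delegated == ["aten.add.Tensor"]
```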
Authored with Claude.